Intimate Interfaces in Action: Assessing the Usability and Subtlety of EMG-Based Motionless Gestures
Mobile communication devices, such as mobile phones and networked personal digital assistants (PDAs), allow users to be constantly connected and to communicate anywhere and at any time, often resulting in personal and private communication taking place in public spaces. This private–public contrast can be problematic. As a remedy, we promote intimate interfaces: interfaces that allow subtle and minimal mobile interaction without disrupting the surrounding environment. In particular, motionless gestures sensed through the electromyographic (EMG) signal have been proposed as a way to allow subtle input in a mobile context. In this paper we present an expansion of the work on EMG-based motionless gestures, including (1) a novel study of their usability in a mobile context for controlling a realistic, multimodal interface and (2) a formal assessment of how noticeable they are to informed observers. Experimental results confirm that subtle gestures can be profitably used within a multimodal interface and that it is difficult for observers to guess when someone is performing a gesture, confirming the hypothesis of subtlety.
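The core sensing idea above can be illustrated with a minimal sketch: a motionless gesture (an isometric muscle contraction) shows up as a sustained rise in the rectified EMG envelope. The function names, window size, and threshold below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: detect a "motionless gesture" as a sustained rise in
# the smoothed, rectified EMG envelope. Window size, threshold, and duration
# are illustrative placeholders, not the values used in the paper.

def smooth_envelope(samples, window=5):
    """Rectify the EMG samples and apply a simple moving-average filter."""
    rectified = [abs(s) for s in samples]
    half = window // 2
    envelope = []
    for i in range(len(rectified)):
        lo = max(0, i - half)
        hi = min(len(rectified), i + half + 1)
        envelope.append(sum(rectified[lo:hi]) / (hi - lo))
    return envelope

def detect_gesture(samples, threshold=0.5, min_duration=3):
    """Return True if the envelope stays above threshold for min_duration samples."""
    run = 0
    for value in smooth_envelope(samples):
        run = run + 1 if value > threshold else 0
        if run >= min_duration:
            return True
    return False
```

A quiet signal (e.g. `[0.01] * 20`) yields `False`, while a burst of sustained activity embedded in noise yields `True`; a real system would additionally calibrate per user and reject motion artifacts.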
Multi-User Framework for Collaboration and Co-Creation in Virtual Reality
Presented as a poster. We present CocoVerse, a shared immersive virtual reality environment in which users interact with each other and create and manipulate virtual objects using a set of hand-based tools. Simple, intuitive interfaces make the application easy to use, and its flexible toolset facilitates constructivist and exploratory learning. The modular design of the system allows it to be easily customized for new room-scale applications.
Multimodal Inductive Transfer Learning for Detection of Alzheimer's Dementia and its Severity
Alzheimer's disease is estimated to affect around 50 million people worldwide and is rising rapidly, with a global economic burden of nearly a trillion dollars. This calls for scalable, cost-effective, and robust methods for detection of Alzheimer's dementia (AD). We present a novel architecture that leverages acoustic, cognitive, and linguistic features to form a multimodal ensemble system. It uses specialized artificial neural networks with temporal characteristics to detect AD and its severity, which is reflected through Mini-Mental State Exam (MMSE) scores. We first evaluate it on the ADReSS challenge dataset, which is a subject-independent and balanced dataset matched for age and gender to mitigate biases, and is available through DementiaBank. Our system achieves state-of-the-art test accuracy, precision, recall, and F1-score of 83.3% each for AD classification, and state-of-the-art test root mean squared error (RMSE) of 4.60 for MMSE score regression. To the best of our knowledge, the system further achieves state-of-the-art AD classification accuracy of 88.0% when evaluated on the full benchmark DementiaBank Pitt database. Our work highlights the applicability and transferability of spontaneous speech to produce a robust inductive transfer learning model, and demonstrates generalizability through a task-agnostic feature space. The source code is available at https://github.com/wazeerzulfikar/alzheimers-dementia
Comment: To appear in INTERSPEECH 202
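The ensemble idea described above can be sketched as simple late fusion: each modality (acoustic, linguistic, cognitive) contributes a probability of AD from its own model, and the ensemble combines them. The function and weighting scheme below are illustrative assumptions; they do not reproduce the paper's networks.

```python
# Hypothetical late-fusion sketch of a multimodal ensemble. Each entry in
# `models` maps a modality name to a callable returning P(AD) from that
# modality's features; the ensemble takes a (weighted) average. These are
# placeholders, not the specialized networks from the paper.

def ensemble_predict(features, models, weights=None):
    """Combine per-modality AD probabilities into one ensemble score.

    features: dict mapping modality name -> that modality's feature input
    models:   dict mapping modality name -> callable(features) -> probability
    weights:  optional dict of fusion weights (defaults to a uniform average)
    """
    names = list(models)
    if weights is None:
        weights = {name: 1.0 / len(names) for name in names}
    score = sum(weights[name] * models[name](features[name]) for name in names)
    return score, score >= 0.5  # ensemble probability and binary AD decision
```

For example, with dummy per-modality predictions of 0.9, 0.6, and 0.3, the uniform ensemble score is 0.6 and the binary decision is positive. Fusion weights could instead be learned on a validation split.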
Investigating Social Presence and Communication with Embodied Avatars in Room-Scale Virtual Reality
Submission includes video. Room-scale virtual reality (VR) holds great potential as a medium for communication and collaboration in remote and same-time, same-place settings. Related work has established that movement realism can create a strong sense of social presence, even in the absence of photorealism. Here, we explore the noteworthy attributes of communicative interaction using embodied minimal avatars in room-scale VR in the same-time, same-place setting. Our system is, as far as we are aware, the first in the research community to enable this kind of interaction. We carried out an experiment in which pairs of users performed two activities in contrasting variants: VR vs. face-to-face (F2F), and 2D vs. 3D. Objective and subjective measures were used to compare these, including motion analysis, electrodermal activity, questionnaires, retrospective think-aloud protocol, and interviews. On the whole, participants communicated effectively in VR to complete their tasks, and reported a strong sense of social presence. The system's high-fidelity capture and display of movement seems to have been a key factor in supporting this. Our results confirm some expected shortcomings of VR compared to F2F, but also some non-obvious advantages. The limited anthropomorphic properties of the avatars presented some difficulties, but the impact of these varied widely between the activities. In the 2D vs. 3D comparison, the basic affordance of freehand drawing in 3D was new to most participants, resulting in novel observations and open questions. We also present methodological observations across all conditions concerning the measures that did and did not reveal differences between conditions, including unanticipated properties of the think-aloud protocol applied to VR.